
    An Eye Tracking Study to Investigate the Influence of Language and Text Direction on Multimedia

    This study investigated how native language orientation influences spatial bias, first visual fixation on the screen, first visual fixation on pictures, learning outcomes, and mental effort of learners. Previous studies supported the effect of native-language writing or reading direction on spatial bias, examining written text and images created by the participants (Barrett et al., 2002; Boroditsky, 2001; Chatterjee, Southwood & Basiko, 1999; Spalek & Hammad, 2005). However, no study had investigated writing direction in multimedia presentations using eye tracking; this study addresses that gap. A total of 84 participants completed the study, forming four groups. The first group (NativeLeft_InstrEng) consisted of individuals whose native language is written from left to right and who had never experienced a right-to-left language; they received the material in English. The second group (NativeRight_InstrAra), whose native language is written from right to left, received the material in Arabic. The third group (NativeLeft_LrnRight_InstrEng) consisted of individuals whose native language is written from left to right and who are learning or have learned a language written from right to left; they received the material in English. The fourth group (NativeRight_InstrEng), whose native language is written from right to left, received the material in English. Participants were asked to complete a survey that consisted of eight sections: demographic questions, a self-estimated prior knowledge test, the instructional unit, a mental effort rating, sentence-forming questions, recall questions, a sequence question and, finally, post-test questions. Eye tracking was used to detect the first fixation on the screen and on pictures, and the results were compared with participants' written responses. Eye movements can be considered a blueprint for how students process visual information (Underwood & Radach, 1998). Significant results for learning and spatial bias confirmed that spatial bias is associated with native language orientation: left-oriented learners were more likely to demonstrate a left bias on the screen, while right-oriented participants demonstrated a right bias. However, exposure to other languages, cultures, or beliefs, or living for some time in a country that uses a language with a different orientation, can influence learners' spatial bias, as seen with the NativeRight_InstrEng group. Finally, differences in visual fixations on the screen and pictures were not significant, perhaps due to the simplicity of the pictures used in this study.
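
    As a rough illustration of the kind of comparison described above (not the study's actual analysis code), the sketch below classifies each participant's first fixation as falling on the left or right half of the screen and summarises the resulting spatial bias per group; the screen width, data layout, and sample values are all hypothetical.

```python
# Minimal sketch (assumed data format, not the study's analysis code): determine
# which half of the screen each participant's first fixation lands on and count
# the left/right bias per group.
from collections import Counter

SCREEN_WIDTH = 1920  # assumed screen resolution in pixels

def first_fixation_side(fixations: list[tuple[float, float, float]]) -> str:
    """fixations: (timestamp, x, y) samples; return which half the earliest one hits."""
    t, x, y = min(fixations, key=lambda f: f[0])
    return "left" if x < SCREEN_WIDTH / 2 else "right"

# Hypothetical first-fixation data for two of the groups: one list per participant.
groups = {
    "NativeLeft_InstrEng": [[(0.21, 410.0, 540.0)], [(0.18, 380.0, 500.0)]],
    "NativeRight_InstrAra": [[(0.25, 1490.0, 530.0)], [(0.19, 1520.0, 510.0)]],
}

for name, participants in groups.items():
    counts = Counter(first_fixation_side(p) for p in participants)
    print(name, dict(counts))
```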

    Development of a cloud-assisted classification technique for the preservation of secure data storage in smart cities

    Cloud computing is the most recent smart city advancement, made possible by the increasing volume of heterogeneous data produced by apps. More storage capacity and processing power are required to process this volume of data. Data analytics is used to examine various datasets, both structured and unstructured. Nonetheless, as the complexity of data in the healthcare and biomedical communities grows, obtaining more precise results from analyses of medical datasets presents a number of challenges. In the cloud environment, big data is abundant, necessitating proper classification that can be carried out effectively with machine learning. Machine learning is used to investigate algorithms for learning and data prediction. The Cleveland database is frequently used by machine learning researchers. Among the performance metrics used to compare the proposed and existing methodologies are execution time, defect detection rate, and accuracy. In this study, two supervised learning-based classifiers, SVM and a novel KNN, were proposed and used to analyse data from a benchmark database obtained from the UCI repository. Initially, intrusions were detected using the SVM classification method. The proposed study demonstrated how the novel KNN, used for its distance capacity, outperformed previous studies. The accuracy of the results of both approaches was evaluated. The results show that the intrusion detection system (IDS), with a 98.98% accuracy rate, produces the best results when using the suggested system.
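
    For readers unfamiliar with this kind of two-classifier comparison, the sketch below trains an SVM and a KNN classifier on a UCI-style tabular dataset with scikit-learn and compares their accuracies. It is a minimal illustration only: the dataset (a built-in UCI stand-in), the train/test split, and the hyperparameters are assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumed, not the authors' exact code): compare an SVM and a
# KNN classifier on a UCI-style tabular dataset.
import numpy as np
from sklearn.datasets import load_breast_cancer  # stand-in for the UCI benchmark used in the paper
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

# Load a public UCI-origin dataset as a placeholder for the benchmark database.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# Scale features; both the SVM and the distance-based KNN benefit from standardization.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# Train and evaluate the two supervised classifiers.
svm = SVC(kernel="rbf").fit(X_train, y_train)
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X_train, y_train)

print("SVM accuracy:", accuracy_score(y_test, svm.predict(X_test)))
print("KNN accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```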

    Improved Multimedia Object Processing for the Internet of Vehicles

    The combination of edge computing and deep learning helps build intelligent edge devices that can make several conditional decisions using comparatively secure and fast machine learning algorithms. An automated car that acts as the data-source node of an intelligent Internet of Vehicles (IoV) system is one such example. Our motivation is to obtain more accurate and rapid object detection using the intelligent cameras of a smart car. The supervision camera of the smart-automobile model uses multimedia data for real-time threat detection and automation. The corresponding comprehensive network combines cooperative multimedia data processing, Internet of Things (IoT) data handling, validation, computation, precise detection, and decision making. These actions face real-time delays when offloading data to the cloud and synchronizing with the other nodes. The proposed model follows a cooperative machine learning technique: it distributes the computational load by slicing real-time object data among comparable intelligent IoT nodes and performs parallel vision processing across connected edge clusters. As a result, the system increases the computational rate and improves accuracy through responsible resource utilization and active–passive learning. We achieved low latency and higher accuracy for object identification through real-time multimedia data objectification.
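
    The load-distribution idea described above can be pictured as slicing a camera frame into tiles and handing the tiles to parallel workers that stand in for cooperating edge nodes, then merging the per-tile detections. The sketch below is a simplified, illustrative simulation: the tiling scheme, the placeholder detector, and the frame dimensions are assumptions, not the paper's implementation.

```python
# Minimal sketch (illustrative only): slice a frame into tiles, process the tiles
# in parallel as if on cooperating edge nodes, and merge the per-tile detections.
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def detect_objects(tile: np.ndarray, tile_id: int) -> list[dict]:
    """Placeholder detector run on one 'edge node'; a real node would run a DNN."""
    score = float(tile.mean()) / 255.0          # dummy "confidence"
    return [{"tile": tile_id, "score": round(score, 3)}]

def slice_frame(frame: np.ndarray, n_nodes: int) -> list[np.ndarray]:
    """Split the frame into horizontal strips, one per participating node."""
    return np.array_split(frame, n_nodes, axis=0)

def process_frame(frame: np.ndarray, n_nodes: int = 4) -> list[dict]:
    tiles = slice_frame(frame, n_nodes)
    # Each tile is processed concurrently, mimicking parallel edge-cluster processing.
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        results = pool.map(detect_objects, tiles, range(n_nodes))
    return [det for tile_dets in results for det in tile_dets]

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # fake camera frame
print(process_frame(frame))
```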

    Recognition of Leaf Disease Using Hybrid Convolutional Neural Network by Applying Feature Reduction

    Agriculture is crucial to the economic prosperity and development of India. Plant diseases can have a devastating influence on food safety and cause considerable losses in agricultural production. Disease identification on the plant is essential for long-term agricultural sustainability. Manually monitoring plant diseases is difficult due to time limitations and the diversity of diseases. In the realm of agricultural inputs, automatic characterization of plant diseases is widely required. Among image-processing methods, deep learning-based approaches perform best and are better suited to this task. This work investigates plant diseases in grapevines. Leaf blight, Black rot, stable, and Black measles are the four types of diseases found in grape plants. Several earlier research proposals using machine learning algorithms were created to detect one or two diseases in grape plant leaves; none offers complete detection of all four diseases. The images are taken from the PlantVillage dataset and used to retrain the EfficientNet B7 deep architecture through transfer learning. Following the transfer learning, the collected features are down-sampled using a Logistic Regression technique. Finally, the most discriminant features are identified using state-of-the-art classifiers, reaching the highest consistent accuracy of 98.7% after 92 epochs. Based on the simulation findings, an appropriate classifier for this application is also suggested. The proposed technique's effectiveness is confirmed by a fair comparison to existing procedures.
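
    A rough sketch of this kind of transfer-learning pipeline is given below: a pretrained EfficientNet B7 is used as a frozen feature extractor on leaf images, and a logistic-regression model is then trained on the pooled features. It is an assumed simplification of the abstract's pipeline (which retrains the backbone and applies further classifiers); the directory names, image size, and batch size are placeholders.

```python
# Minimal sketch (assumed details): EfficientNetB7 as a frozen feature extractor on
# grape-leaf images, followed by a logistic-regression classifier on the features.
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

IMG_SIZE = (600, 600)  # EfficientNetB7's default input resolution

# Image folders (hypothetical paths) organised as one sub-folder per disease class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "grape_leaves/train", image_size=IMG_SIZE, batch_size=16)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "grape_leaves/test", image_size=IMG_SIZE, batch_size=16)

# Pretrained backbone with global average pooling; weights stay frozen here.
backbone = tf.keras.applications.EfficientNetB7(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False
preprocess = tf.keras.applications.efficientnet.preprocess_input

def extract(ds):
    """Run every batch through the frozen backbone and collect feature vectors."""
    feats, labels = [], []
    for images, y in ds:
        feats.append(backbone(preprocess(images), training=False).numpy())
        labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

X_train, y_train = extract(train_ds)
X_test, y_test = extract(test_ds)

# Logistic regression on the pooled deep features, echoing the feature-reduction step.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```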

    Spectral–Spatial Features Exploitation Using Lightweight HResNeXt Model for Hyperspectral Image Classification

    Hyperspectral image classification is vital for various remote sensing applications; however, it remains challenging due to the complex and high-dimensional nature of hyperspectral data. This paper introduces a novel approach to address this challenge by leveraging spectral and spatial features through a lightweight HResNeXt model. The proposed model is designed to overcome the limitations of traditional methods by combining residual connections and cardinality to enable efficient and effective feature extraction from hyperspectral images, capturing both spectral and spatial information simultaneously. Furthermore, the paper includes an in-depth analysis of the learned spectral–spatial features, providing valuable insights into the discriminative power of the proposed approach. The extracted features exhibit strong discriminative capabilities, enabling accurate classification even in challenging scenarios with limited training samples and complex spectral variations. Extensive experimental evaluations are conducted on four benchmark hyperspectral data sets: Pavia University (PU), Kennedy Space Center (KSC), Salinas Scene (SA), and Indian Pines (IP). The performance of the proposed method is compared with state-of-the-art methods. The quantitative and visual results demonstrate the proposed approach's superiority in classification accuracy, noise robustness, and computational efficiency. HResNeXt obtained overall accuracies of 99.46%, 81.46%, 99.75%, and 98.64% on PU, KSC, SA, and IP, respectively. Notably, the lightweight HResNeXt model achieves competitive results while requiring fewer computational resources, making it well-suited for real-time applications.
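
    The core idea of combining residual connections with cardinality can be illustrated with a ResNeXt-style block built from grouped convolutions, applied to small hyperspectral patches. The sketch below is not the authors' HResNeXt architecture; the channel count, cardinality, patch size, and class count are assumptions chosen only to make the example run.

```python
# Minimal sketch (assumed, not the authors' architecture): a ResNeXt-style block with
# grouped convolutions ("cardinality") and a residual connection on hyperspectral patches.
import torch
import torch.nn as nn

class GroupedResidualBlock(nn.Module):
    def __init__(self, channels: int, cardinality: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            # Grouped 3x3 convolution supplies the "cardinality" of ResNeXt.
            nn.Conv2d(channels, channels, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Residual connection: add the input back to the transformed features.
        return self.act(x + self.body(x))

# Example: 11x11 spatial patches whose spectral bands are first compressed to 64
# channels by a 1x1 convolution (all sizes are assumptions).
bands, patch = 200, 11  # e.g. Indian Pines has 200 spectral bands
stem = nn.Conv2d(bands, 64, kernel_size=1)
block = GroupedResidualBlock(64, cardinality=8)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 16))

x = torch.randn(4, bands, patch, patch)      # 4 hyperspectral patches
logits = head(block(stem(x)))                # class scores for 16 land-cover classes
print(logits.shape)                          # torch.Size([4, 16])
```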